Latent Variable Models for Neural Data
Abstract
Acknowledgements

Many thanks are due to my advisors, Richard Andersen and John Hopfield. Richard has been tremendously supportive throughout, providing valuable advice and encouragement, even when my work has taken me outside the domains typically trodden by members of his group. Furthermore, he has assembled an exceptional group of people in whose company I have worked for the last five years. John graciously provided an intellectual home away from home, as well as invaluable advice and mentoring on a number of thorny research and professional issues.

The work on spike sorting that is presented in chapter 5 is a component of a larger multiunit recording project within Richard Andersen's laboratory, on which John Pezaris and I have collaborated for the past five years. This has been a tremendously fruitful and rewarding collaboration. I could not have wished for a more able or more generous colleague. The analysis of chapter 6 was also a collaborative effort, in this case with Jennifer Linden. This too was an extremely rewarding effort; in both the intellectual and personal domains my debt to Jennifer is immeasurable.

A number of critical conversations have helped to shape the work that appears here. In the first place, the fact that I chose to study neuroscience at all is largely due to the encouragement of …. Sam Roweis introduced me to the EM algorithm, which forms the core thread running throughout this thesis. Bill Bialek steered me towards the simple greedy approach to spike detection that appears in section 5.13. Tali Tishby brought earlier work on deterministic annealing to my attention, from which the REM algorithm of chapter 3 grew. Sanjoy Mahajan convinced me to take a closer look at the possibility of modeling bursting with hidden Markov models (section 5.10.2). Alan Barr provided a piece of inspirational advice at a critical juncture (see the Preface).
Ken Miller and his group at UCSF have been working on the problem of spike sorting over the last few years as well, and we have enjoyed very many useful exchanges of ideas. Besides these specific points, innumerable conversations with my peers at Caltech have contributed to my understanding of the issues involved. Special thanks are due to Sam Roweis and Flip Sabes, who provided crucial support as I made my initial faltering steps into the realm of machine learning and statistics. Members of the Hopfield group, particularly Sam Roweis, Carlos Brody, Erik Winfree and Sanjoy …
Similar resources
Using multivariate generalized linear latent variable models to measure the difference in event count for stranded marine animals
BACKGROUND AND OBJECTIVES: The classification of marine animals as protected species makes data and information on them very important. This has led to the need to retrieve and understand data on event counts for stranded marine animals based on location of emergence, number of individuals, behavior, and threats to their presence. Whales are g...
Leveraging the Exact Likelihood of Deep Latent Variable Models
Deep latent variable models combine the approximation abilities of deep neural networks and the statistical foundations of generative models. The induced data distribution is an infinite mixture model whose density is extremely delicate to compute. Variational methods are consequently used for inference, following the seminal work of Rezende et al. (2014) and Kingma & Welling (2014). We study t...
Latent Subspace Clustering based on Deep Neural Networks
Clustering approaches have been widely used in the process control community for unsupervised classification, which benefits further analysis, modeling, and optimization. Process data generally involve far more dimensions than needed; this phenomenon is called "data rich but information poor" and is an obstacle to reasonable classification. Therefore, it is desirable to use latent variable mod...
Asymptotic accuracy of Bayesian estimation for a single latent variable
In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accura...
Accuracy of latent-variable estimation in Bayesian semi-supervised learning
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable on...
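In the Gaussian-mixture setting described above, estimating the latent variable for an observation means computing the posterior over its cluster assignment by Bayes' rule. The following is a minimal sketch of that computation, not drawn from any of the works listed here; the one-dimensional mixture and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D Gaussian mixture (parameters are assumptions, not from the text).
weights = np.array([0.5, 0.5])   # mixing proportions pi_k
means = np.array([0.0, 5.0])     # component means mu_k
var = 1.0                        # shared variance

def responsibilities(x):
    """Posterior p(z = k | x) over the latent assignment z, by Bayes' rule."""
    log_lik = -0.5 * (x - means) ** 2 / var   # Gaussian log-density up to a constant
    unnorm = weights * np.exp(log_lik)        # prior times likelihood
    return unnorm / unnorm.sum()              # normalize over components

r = responsibilities(4.8)
# The observation 4.8 lies close to the second component's mean,
# so nearly all posterior mass falls on latent state k = 1.
```

In cluster analysis, this posterior (the "responsibility") is exactly the latent-variable estimate that unsupervised learning produces; the E-step of the EM algorithm computes it for every observation.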
Deep Exponential Families
We describe deep exponential families (DEFs), a class of latent variable models that are inspired by the hidden structures used in deep neural networks. DEFs capture a hierarchy of dependencies between latent variables, and are easily generalized to many settings through exponential families. We perform inference using recent “black box” variational inference techniques. We then evaluate variou...
Journal:
Volume, Issue:
Pages: -
Published: 1999